A Multi-Scaling Reinforcement Learning Trading System Based on Multi-Scaling Convolutional Neural Networks
Authors
Abstract
Advancements in machine learning have led to increased interest in applying deep reinforcement learning techniques to investment decision-making problems. Despite this, existing approaches often rely solely on single-scaling daily data, neglecting the importance of multi-scaling information such as weekly or monthly patterns. To address this limitation, a multi-scaling convolutional neural network for reinforcement-learning-based stock trading, trained with SARSA (state, action, reward, state, action), is proposed. Our method automatically extracts features from financial data by applying convolutional kernels with several filter sizes to perform multi-scaling extraction of temporal features. Multi-scaling feature mining allows agents to operate over longer time scales, identifying low positions in the price line and avoiding fluctuations during continuous declines. This mimics the human approach of considering information at varying scales when trading. We further enhance the network's robustness by adding an average pooling layer to the backbone network, reducing overfitting. SARSA, an on-policy method, generates dynamic trading strategies that combine information across different scales while avoiding dangerous strategies. We evaluate the effectiveness of our proposed method on four real-world datasets (Dow Jones, NASDAQ, General Electric, and AAPL) spanning from 1 January 2007 to 31 December 2020 and demonstrate its superior profits compared with baseline methods. In addition, we conduct various comparative and ablation tests to demonstrate the superiority of the proposed architecture. Through these experiments, the multi-scaling module yields better results than the single-scaling module.
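For intuition, the following is a minimal sketch of the two ideas the abstract combines: parallel 1-D convolutions with several filter sizes (roughly daily/weekly/monthly receptive fields) followed by an average pooling layer, and an on-policy SARSA update over the resulting Q-values. It is written in PyTorch; all names and hyperparameters (MultiScaleQNet, kernel sizes 3/5/21, three trading actions) are illustrative assumptions rather than the authors' exact architecture.

```python
# Minimal sketch, assuming a 30-day window of OHLC features and actions {sell, hold, buy}.
# Names, kernel sizes, and hyperparameters are illustrative, not the paper's exact design.
import torch
import torch.nn as nn

class MultiScaleQNet(nn.Module):
    """Parallel 1-D convolutions over a price window; branches with different
    kernel sizes act as short/medium/long receptive fields, and average pooling
    over the time axis reduces overfitting before the Q-value head."""
    def __init__(self, in_channels=4, hidden=16, kernel_sizes=(3, 5, 21), n_actions=3):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_channels, hidden, k, padding=k // 2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),        # average pooling layer
            )
            for k in kernel_sizes
        ])
        self.head = nn.Linear(hidden * len(kernel_sizes), n_actions)

    def forward(self, x):                        # x: (batch, channels, time)
        feats = [b(x).squeeze(-1) for b in self.branches]
        return self.head(torch.cat(feats, dim=-1))  # Q-values per action

# On-policy SARSA update: Q(s,a) <- Q(s,a) + alpha * [r + gamma * Q(s',a') - Q(s,a)],
# implemented here as a gradient step on the squared TD error.
def sarsa_step(qnet, optimizer, s, a, r, s_next, a_next, gamma=0.99):
    q_sa = qnet(s)[0, a]
    with torch.no_grad():
        target = r + gamma * qnet(s_next)[0, a_next]
    loss = (target - q_sa) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage on random data.
qnet = MultiScaleQNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
s, s_next = torch.randn(1, 4, 30), torch.randn(1, 4, 30)
sarsa_step(qnet, opt, s, a=2, r=0.01, s_next=s_next, a_next=1)
```

Concatenating the pooled branch outputs is one straightforward way to let the agent see short- and long-horizon patterns at once; the paper's actual fusion scheme and trading environment may differ.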
Similar resources
A new 2D block ordering system for wavelet-based multi-resolution up-scaling
A complete and accurate analysis of the complex spatial structure of heterogeneous hydrocarbon reservoirs requires detailed geological models, i.e., fine-resolution models. Due to the high computational cost of simulating such models, single-resolution up-scaling techniques are commonly used to reduce the volume of the simulated models at the expense of losing precision. Several multi-scale ...
Reinforcement Learning in Neural Networks: A Survey
In recent years, research on reinforcement learning (RL) has focused on bridging the gap between adaptive optimal control and bio-inspired learning techniques. Neural network reinforcement learning (NNRL) is among the most popular algorithms in the RL framework. The use of neural networks enables RL to search for optimal policies more efficiently in several real-life applicat...
Scaling Reinforcement Learning to the Unconstrained Multi-Agent Domain
Scaling Reinforcement Learning to the Unconstrained Multi-Agent Domain. (August 2007) Victor Palmer, B.S., Lubbock Christian University Chair of Advisory Committee: Dr. Thomas Ioerger Reinforcement learning is a machine learning technique designed to mimic the way animals learn by receiving rewards and punishment. It is designed to train intelligent agents when very little is known about the ag...
Reinforcement Learning in Multi-Party Trading Dialog
In this paper, we apply reinforcement learning (RL) to a multi-party trading scenario where the dialog system (learner) trades with one, two, or three other agents. We experiment with different RL algorithms and reward functions. The negotiation strategy of the learner is learned through simulated dialog with trader simulators. In our experiments, we evaluate how the performance of the learner ...
Journal
Journal title: Mathematics
Year: 2023
ISSN: 2227-7390
DOI: https://doi.org/10.3390/math11112467